Monitoring robot actions for error detection and recovery
Reliability is a serious problem in computer controlled robot systems. Although robots serve successfully in relatively simple applications such as painting and spot welding, their potential in areas such as automated assembly is hampered by programming problems. A program for assembling parts may be logically correct, execute correctly on a simulator, and even execute correctly on a robot most of the time, yet still fail unexpectedly in the face of real world uncertainties. Recovery from such errors is far more complicated than recovery from simple controller errors, since even expected errors can often manifest themselves in unexpected ways. Here, a novel approach is presented for improving robot reliability. Instead of anticipating errors, the researchers use knowledge-based programming techniques so that the robot can autonomously exploit knowledge about its task and environment to detect and recover from failures. They describe preliminary experiments with a system that they designed and constructed.
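The expectation-based monitoring idea in this abstract can be illustrated with a minimal sketch (not the authors' system): each action carries knowledge-derived expected sensor readings, and post-execution observations are checked against them so a mismatch triggers recovery rather than a silent failure. All names, readings, and tolerances here are invented for illustration.

```python
# Hypothetical sketch of expectation-based action monitoring; the trace pairs
# each action with expected and observed sensor readings.

def consistent(expected, observed, tol=0.05):
    """True if every observed reading is within tol of its expectation."""
    return all(abs(e - o) <= tol for e, o in zip(expected, observed))

def monitor(trace, tol=0.05):
    """Return the first action whose observations violate expectations, or None."""
    for action, expected, observed in trace:
        if not consistent(expected, observed, tol):
            return action          # hand off to a recovery planner here
    return None

trace = [("grasp", [0.0, 1.0], [0.01, 0.98]),   # within tolerance
         ("insert", [2.0, 0.0], [2.0, 0.40])]   # peg jammed: large mismatch
```

The point of the sketch is that detection needs only task knowledge (the expectations), not an enumeration of every possible error mode.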
Asymptotic robustness of Kelly's GLRT and Adaptive Matched Filter detector under model misspecification
A fundamental assumption underlying any Hypothesis Testing (HT) problem is that the available data follow the parametric model assumed to derive the test statistic. Nevertheless, a perfect match between the true and the assumed data models cannot be achieved in many practical applications. In all these cases, it is advisable to use a robust decision test, i.e. a test whose statistic preserves (at least asymptotically) the same probability density function (pdf) for a suitable set of possible input data models under the null hypothesis. Building upon the seminal work of Kent (1982), in this paper we investigate the impact of the model mismatch in a recurring HT problem in radar signal processing applications: testing the mean of a set of Complex Elliptically Symmetric (CES) distributed random vectors under a possibly misspecified Gaussian data model. In particular, by using this general misspecified framework, a new look at two popular detectors, Kelly's Generalized Likelihood Ratio Test (GLRT) and the Adaptive Matched Filter (AMF), is provided and their robustness properties are investigated.
Comment: ISI World Statistics Congress 2017 (ISI2017), Marrakech, Morocco, 16-21 July 2017
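The two detectors compared in this abstract have well-known closed forms, sketched below in one common textbook normalization (scalings vary across references; this is not the paper's code). Here x is the primary data snapshot, Z an N x K matrix of signal-free training data, s the known steering vector, and S = Z Z^H the unnormalized sample covariance.

```python
import numpy as np

def kelly_glrt(x, Z, s):
    """Kelly's GLRT statistic; lies in [0, 1) by the Cauchy-Schwarz inequality."""
    S = Z @ Z.conj().T                      # unnormalized sample covariance
    Si = np.linalg.inv(S)
    num = np.abs(s.conj() @ Si @ x) ** 2
    return num / ((s.conj() @ Si @ s).real * (1.0 + (x.conj() @ Si @ x).real))

def amf(x, Z, s):
    """Adaptive Matched Filter statistic: same quantities, but without the
    data-dependent term in the denominator."""
    S = Z @ Z.conj().T
    Si = np.linalg.inv(S)
    return np.abs(s.conj() @ Si @ x) ** 2 / (s.conj() @ Si @ s).real
```

The structural difference visible in the denominators (the extra 1 + x^H S^{-1} x term in Kelly's test) is precisely what gives the two detectors different robustness behavior under model mismatch.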
Flexible Decision Control in an Autonomous Trading Agent
An autonomous trading agent is a complex piece of software that must operate in a competitive economic environment and support a research agenda. We describe the structure of decision processes in the MinneTAC trading agent, focusing on the use of evaluators – configurable, composable modules for data analysis and prediction that are chained together at runtime to support agent decision-making. Through a set of examples, we show how this structure supports sales and procurement decisions, and how those decision processes can be modified in useful ways by changing evaluator configurations. To put this work in context, we also report on results of an informal survey of agent design approaches among the competitors in the Trading Agent Competition for Supply Chain Management (TAC SCM).
Keywords: autonomous trading agent; decision processes
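The evaluator idea described above can be sketched as small, configurable analysis modules composed into a chain at runtime, each transforming a shared data record. The class and key names below are invented for illustration and are not MinneTAC's actual API.

```python
# Hypothetical evaluator chain: each stage reads and writes a shared dict,
# so decision logic is changed by reconfiguring the chain, not the code.

class Evaluator:
    def evaluate(self, data):
        raise NotImplementedError

class MovingAverage(Evaluator):
    """Averages the trailing window of a series into '<key>_avg'."""
    def __init__(self, key, window):
        self.key, self.window = key, window
    def evaluate(self, data):
        xs = data[self.key][-self.window:]
        data[self.key + "_avg"] = sum(xs) / len(xs)
        return data

class Markup(Evaluator):
    """Turns an estimated cost into a price via a configured margin."""
    def __init__(self, key, factor):
        self.key, self.factor = key, factor
    def evaluate(self, data):
        data["price"] = data[self.key] * self.factor
        return data

def run_chain(chain, data):
    for ev in chain:                # chains are assembled from configuration
        data = ev.evaluate(data)
    return data

chain = [MovingAverage("cost", 3), Markup("cost_avg", 1.2)]
```

Swapping `MovingAverage` for a different predictor, or changing the markup factor, alters the pricing decision without touching the surrounding agent code, which is the composability the abstract emphasizes.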
An Evolutionary Framework for Determining Heterogeneous Strategies in Multi-Agent Marketplaces
We propose an evolutionary approach for studying the dynamics of interaction of strategic agents that interact in a marketplace. The goal is to learn which agent strategies are most suited by observing the distribution of the agents that survive in the market over extended periods of time. We present experimental results from a simulated market, where multiple service providers compete for customers using different deployment and pricing schemes. The results show that heterogeneous strategies evolve and co-exist in the same market.
Keywords: marketing; simulation; multi-agent systems; complexity economics; trading agents
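The population dynamics behind this kind of study can be illustrated with a discrete replicator update, in which each strategy's market share grows in proportion to its payoff relative to the market average. This is a generic sketch of the mechanism, with fixed payoffs for brevity; in the paper the payoffs arise from simulated provider and customer interactions.

```python
# Discrete replicator dynamics over strategy shares (a standard model,
# not necessarily the paper's exact update rule).

def replicator_step(shares, payoffs):
    """One update: share_i <- share_i * payoff_i / average payoff."""
    avg = sum(s * p for s, p in zip(shares, payoffs))
    return [s * p / avg for s, p in zip(shares, payoffs)]

def evolve(shares, payoffs, steps):
    for _ in range(steps):
        shares = replicator_step(shares, payoffs)
    return shares
```

With constant payoffs the higher-payoff strategy takes over; heterogeneous co-existence, as observed in the paper, requires payoffs that depend on the current population mix (e.g. a niche strategy that earns more when rare).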
Predit: A temporal predictive framework for scheduling systems
Scheduling can be formalized as a Constraint Satisfaction Problem (CSP). Within this framework, activities belonging to a plan are interconnected via temporal constraints that account for slack among them. The temporal representation must include methods for constraint propagation and provide a logic for symbolic and numerical deductions. In this paper we describe a support framework for opportunistic reasoning in constraint-directed scheduling. In order to focus the attention of an incremental scheduler on critical problem aspects, some discrete temporal indexes are presented. They are also useful for predicting the degree of resource contention. The predictive method expressed through our indexes can be seen as a Knowledge Source for an opportunistic scheduler with a blackboard architecture.
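The temporal machinery such a framework relies on can be sketched with a Simple Temporal Network: an entry w[i][j] bounds the difference t_j - t_i <= w[i][j], and all-pairs shortest paths (Floyd-Warshall) tightens every bound, exposing the slack between activities. This is a generic illustration of temporal constraint propagation, not the paper's specific indexes.

```python
# Simple Temporal Network propagation via Floyd-Warshall; w[i][j] is the
# tightest known upper bound on t_j - t_i (INF when unconstrained).

INF = float("inf")

def propagate(w):
    """Tighten the STN distance matrix in place; return False if inconsistent."""
    n = len(w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if w[i][k] + w[k][j] < w[i][j]:
                    w[i][j] = w[i][k] + w[k][j]
    # A negative self-distance means a negative cycle: no schedule exists.
    return all(w[i][i] >= 0 for i in range(n))
```

After propagation, w[i][j] + w[j][i] measures the remaining flexibility between two time points; small values flag the critical activities an incremental scheduler should attend to first.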
Performance Bounds for Parameter Estimation under Misspecified Models: Fundamental findings and applications
Inferring information from a set of acquired data is the main objective of any signal processing (SP) method. In particular, the common problem of estimating the value of a vector of parameters from a set of noisy measurements is at the core of a plethora of scientific and technological advances in the last decades; for example, wireless communications, radar and sonar, biomedicine, image processing, and seismology, just to name a few. Developing an estimation algorithm often begins by assuming a statistical model for the measured data, i.e. a probability density function (pdf) which, if correct, fully characterizes the behaviour of the collected data/measurements. Experience with real data, however, often exposes the limitations of any assumed data model since modelling errors at some level are always present. Consequently, the true data model and the model assumed to derive the estimation algorithm could differ. When this happens, the model is said to be mismatched or misspecified. Therefore, understanding the possible performance loss or regret that an estimation algorithm could experience under model misspecification is of crucial importance for any SP practitioner. Further, understanding the limits on the performance of any estimator subject to model misspecification is of practical interest. Motivated by the widespread and practical need to assess the performance of a mismatched estimator, the first goal of this paper is to bring attention to the main theoretical findings on estimation theory, and in particular on lower bounds under model misspecification, that have been published in the statistical and econometric literature in the last fifty years. Second, some applications are discussed to illustrate the broad range of areas and problems to which this framework extends, and consequently the numerous opportunities available for SP researchers.
Comment: To appear in the IEEE Signal Processing Magazine
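The lower bounds this survey refers to take a standard "sandwich" form, stated here in its common textbook version (Huber's sandwich covariance, the basis of the misspecified Cramér-Rao bound):

```latex
% Misspecified ML converges to the pseudo-true parameter: the point of the
% assumed family f_theta closest to the true pdf p in KL divergence.
\theta^{*} = \arg\min_{\theta} \, D\!\left(p \,\|\, f_{\theta}\right)
% For estimators unbiased at theta*, the error covariance obeys
\mathrm{E}_{p}\!\left[(\hat{\theta}-\theta^{*})(\hat{\theta}-\theta^{*})^{T}\right]
  \succeq A_{\theta^{*}}^{-1} B_{\theta^{*}} A_{\theta^{*}}^{-1},
\qquad
A_{\theta} = \mathrm{E}_{p}\!\left[\nabla_{\theta}\nabla_{\theta}^{T}\ln f_{\theta}(x)\right],
\quad
B_{\theta} = \mathrm{E}_{p}\!\left[\nabla_{\theta}\ln f_{\theta}(x)\,\nabla_{\theta}^{T}\ln f_{\theta}(x)\right]
```

Under correct specification, A equals -B and the sandwich collapses to the classical Cramér-Rao bound B^{-1}, so the bound quantifies exactly the "regret" the abstract mentions.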
Real-time Tactical and Strategic Sales Management for Intelligent Agents Guided By Economic Regimes
Many enterprises that participate in dynamic markets need to make product pricing and inventory resource utilization decisions in real-time. We describe a family of statistical models that address these needs by combining characterization of the economic environment with the ability to predict future economic conditions to make tactical (short-term) decisions, such as product pricing, and strategic (long-term) decisions, such as level of finished goods inventories. Our models characterize economic conditions, called economic regimes, in the form of recurrent statistical patterns that have clear qualitative interpretations. We show how these models can be used to predict prices, price trends, and the probability of receiving a customer order at a given price. These “regime” models are developed using statistical analysis of historical data, and are used in real-time to characterize observed market conditions and predict the evolution of market conditions over multiple time scales. We evaluate our models using a testbed derived from the Trading Agent Competition for Supply Chain Management (TAC SCM), a supply chain environment characterized by competitive procurement and sales markets, and dynamic pricing. We show how regime models can be used to inform both short-term pricing decisions and long-term resource allocation decisions. Results show that our method outperforms more traditional short- and long-term predictive modeling approaches.
Keywords: dynamic pricing; trading agent competition; agent-mediated electronic commerce; dynamic markets; economic regimes; enabling technologies; price forecasting; supply-chain
Semiparametric Inference and Lower Bounds for Real Elliptically Symmetric Distributions
This paper has a twofold goal. The first aim is to provide a deeper understanding of the family of the Real Elliptically Symmetric (RES) distributions by investigating their intrinsic semiparametric nature. The second aim is to derive a semiparametric lower bound for the estimation of the parametric component of the model. The RES distributions represent a semiparametric model where the parametric part is given by the mean vector and by the scatter matrix, while the non-parametric, infinite-dimensional part is represented by the density generator. Since, in practical applications, we are often interested only in the estimation of the parametric component, the density generator can be considered as a nuisance. The first part of the paper is dedicated to conveniently placing the RES distributions in the framework of semiparametric group models. In the second part of the paper, building on the mathematical tools previously introduced, the Constrained Semiparametric Cramér-Rao Bound (CSCRB) for the estimation of the mean vector and of the constrained scatter matrix of a RES distributed random vector is introduced. The CSCRB provides a lower bound on the Mean Squared Error (MSE) of any robust M-estimator of the mean vector and scatter matrix when no a priori information on the density generator is available. A closed-form expression for the CSCRB is derived. Finally, in simulations, we assess the statistical efficiency of Tyler's and Huber's scatter matrix M-estimators with respect to the CSCRB.
Comment: This paper has been accepted for publication in IEEE Transactions on Signal Processing
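Of the estimators benchmarked against the CSCRB, Tyler's scatter M-estimator has a particularly compact form: a fixed-point iteration that is distribution-free over the elliptical family. Below is a generic sketch (iteration count and normalization are implementation choices, not taken from the paper); the trace normalization resolves the scale ambiguity inherent in the shape matrix.

```python
import numpy as np

def tyler_scatter(X, iters=100):
    """Tyler's fixed-point scatter estimator for an n x p zero-mean data
    matrix X; returns the shape matrix normalized so that trace(V) = p."""
    n, p = X.shape
    V = np.eye(p)
    for _ in range(iters):
        Vi = np.linalg.inv(V)
        d = np.einsum("ij,jk,ik->i", X, Vi, X)      # x_i^T V^{-1} x_i
        V = (p / n) * (X / d[:, None]).T @ X        # weighted outer products
        V = p * V / np.trace(V)                     # fix the scale
    return V
```

Because each sample is normalized by its own Mahalanobis-type distance, the estimate depends only on the directions of the data, which is why its distribution does not depend on the unknown density generator.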
Detecting and Forecasting Economic Regimes in Multi-Agent Automated Exchanges
We show how an autonomous agent can use observable market conditions to characterize the microeconomic situation of the market and predict future market trends. The agent can use this information to make both tactical decisions, such as pricing, and strategic decisions, such as product mix and production planning. We develop methods to learn dominant market conditions, such as over-supply or scarcity, from historical data using Gaussian mixture models to construct price density functions. We discuss how this model can be combined with real-time observable information to identify the current dominant market condition and to forecast market changes over a planning horizon. We forecast market changes via both a Markov correction-prediction process and an exponential smoother. Empirical analysis shows that the exponential smoother yields more accurate predictions for the current and the next day (supporting tactical decisions), while the Markov correction-prediction process is better for longer term predictions (supporting strategic decisions). Our approach offers more flexibility than traditional regression based approaches, since it does not assume a fixed functional relationship between dependent and independent variables. We validate our methods by presenting experimental results in a case study, the Trading Agent Competition for Supply Chain Management.
Keywords: dynamic pricing; machine learning; market forecasting; trading agents
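The regime-identification step described above can be sketched as follows: a Gaussian mixture over prices defines the regimes, the posterior over mixture components given an observed price identifies the current regime, and an exponential smoother tracks that posterior over time. The mixture parameters below are invented placeholders (in the paper they are fit from historical data with EM), and the two components stand in for "scarcity" and "over-supply".

```python
import numpy as np

# Hypothetical two-regime price mixture; all parameters are illustrative.
MEANS = np.array([0.3, 0.7])        # normalized price level per regime
STDS = np.array([0.05, 0.08])
WEIGHTS = np.array([0.5, 0.5])

def regime_posterior(price):
    """P(regime | price) under the Gaussian mixture (Bayes' rule)."""
    lik = WEIGHTS * np.exp(-0.5 * ((price - MEANS) / STDS) ** 2) / STDS
    return lik / lik.sum()

def smooth(posteriors, alpha=0.3):
    """Exponentially smoothed regime estimate over a sequence of posteriors."""
    est = posteriors[0]
    for p in posteriors[1:]:
        est = alpha * p + (1 - alpha) * est
    return est
```

The smoothed posterior supports the short-horizon (tactical) predictions; the longer-horizon forecasts in the paper come from a Markov correction-prediction process over the same regime labels.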